27 research outputs found

    Raspberry Pi Based Intelligent Wireless Sensor Node for Localized Torrential Rain Monitoring

    Wireless sensor networks have proved effective for long-term, localized torrential rain monitoring. However, the widely used architecture of wireless sensor networks for rain monitoring relies on network transport and back-end computation, which delays the response to heavy rain in localized areas. Our work improves this architecture by applying logistic regression and support vector machine classification on an intelligent wireless sensor node built on a Raspberry Pi. The front-end sensor nodes not only acquire data from sensors but also independently analyze the probability of upcoming heavy rain and issue timely early warnings to local clients. Because the sensor nodes send only the computed probability to the back-end server, the network transport burden is reduced. Simulation results demonstrate that our sensor system architecture has the potential to improve the local response to heavy rain and also raises the monitoring capacity.
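    As a rough illustration of the on-node decision described above, the probability of upcoming heavy rain can be computed by a logistic-regression score evaluated directly on the node. The feature set, coefficients, and warning threshold below are invented for illustration; the paper's actual model is not given in the abstract.

```python
import math

# Hypothetical pre-trained coefficients, for illustration only; the
# paper's actual model and feature set are not published in the abstract.
WEIGHTS = {"humidity": 0.12, "pressure": -0.08, "temperature": -0.02}
BIAS = 70.0  # offsets the pressure term so typical readings land near the decision boundary

def heavy_rain_probability(humidity, pressure, temperature):
    """Logistic-regression probability computed on the node itself."""
    z = (BIAS
         + WEIGHTS["humidity"] * humidity
         + WEIGHTS["pressure"] * pressure
         + WEIGHTS["temperature"] * temperature)
    return 1.0 / (1.0 + math.exp(-z))

def maybe_warn(reading, threshold=0.8):
    """Issue a local early warning without waiting for the back-end server."""
    p = heavy_rain_probability(*reading)
    return p, p >= threshold
```

    Only the scalar probability would then be forwarded to the back-end, which is what relieves the network transport burden.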

    Bayesian Information Criterion Based Feature Filtering for the Fusion of Multiple Features in High-Spatial-Resolution Satellite Scene Classification

    This paper presents a novel classification method for high-spatial-resolution satellite scenes that introduces a Bayesian information criterion (BIC)-based feature filtering process to further eliminate opaque and redundant information shared among multiple features. Firstly, two diverse and complementary feature descriptors are extracted to characterize the satellite scene. Then, sparse canonical correlation analysis (SCCA) with a penalty function is employed to fuse the extracted feature descriptors while simultaneously removing the ambiguities and redundancies between them. After that, a two-phase BIC-based feature filtering process is designed to filter out the remaining redundant information. In the first phase, a constraint is gradually imposed on the loadings via an iterative process to prevent the sparse correlation from dropping below a lower confidence limit of the approximated canonical correlation. In the second phase, the BIC is used to conduct the feature filtering, setting the smallest loading (in absolute value) across all features to zero in each iteration. Lastly, a support vector machine with a pyramid match kernel is applied to obtain the final result. Experimental results on high-spatial-resolution satellite scenes demonstrate that the proposed approach achieves satisfactory classification accuracy.
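    The second filtering phase can be sketched as an iterative loop that zeroes the smallest non-zero absolute loading and keeps the sparsity level with the lowest BIC. A plain least-squares residual stands in here for the canonical-correlation objective, which the abstract does not fully specify.

```python
import numpy as np

def bic(n, rss, k):
    """BIC for a Gaussian linear model: n * ln(RSS/n) + k * ln(n)."""
    return n * np.log(rss / n) + k * np.log(n)

def filter_loadings(X, y, loadings):
    """Second-phase sketch: repeatedly zero the smallest non-zero |loading|
    and keep the sparsity level with the lowest BIC. A least-squares fit
    is an assumed stand-in for the SCCA objective."""
    n = len(y)
    w = np.asarray(loadings, dtype=float).copy()
    best_w, best_score = w.copy(), None
    while np.count_nonzero(w) > 1:
        nz = np.where(w != 0, np.abs(w), np.inf)
        w[int(np.argmin(nz))] = 0.0           # zero the smallest loading
        rss = float(np.sum((y - X @ w) ** 2)) + 1e-12
        score = bic(n, rss, int(np.count_nonzero(w)))
        if best_score is None or score < best_score:
            best_score, best_w = score, w.copy()
    return best_w
```

    The BIC penalty trades residual fit against the number of retained loadings, so uninformative features are zeroed while genuinely predictive ones survive.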

    PCBA-Net: Pyramidal Convolutional Block Attention Network for Synthetic Aperture Radar Image Change Detection

    Synthetic aperture radar (SAR) image change detection (CD) remains a crucial and challenging task. Recently, with the boom of deep learning technologies, many deep learning methods have been presented for SAR CD, achieving superior performance to traditional methods. However, most available convolutional neural network (CNN) approaches use a single small convolution kernel, which has a limited receptive field and cannot make full use of the contextual information and useful fine details of SAR images. To address this drawback, a pyramidal convolutional block attention network (PCBA-Net) is proposed for SAR image CD in this study. The proposed PCBA-Net consists of pyramidal convolution (PyConv) and a convolutional block attention module (CBAM). PyConv not only extends the receptive field of the input to capture sufficient context, but also processes the input with increasing kernel sizes in parallel to obtain multi-scale detail. Additionally, CBAM is introduced in the PCBA-Net to emphasize crucial information. To verify the performance of our method, six real SAR datasets are used in the experiments. The results on these six datasets reveal that our approach outperforms several state-of-the-art methods.
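    The pyramidal-convolution idea, parallel branches with growing kernel sizes followed by a channel-attention gate, can be sketched in NumPy as follows. Mean filters stand in for learned kernels, and the gate is a heavily simplified CBAM-style channel attention; both are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def conv2d_same(img, k):
    """Naive 'same' 2-D convolution with a k x k mean kernel (zero padding)."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def pyconv_with_channel_attention(img, kernel_sizes=(3, 5, 7)):
    """Pyramidal convolution sketch: parallel branches with increasing
    kernel sizes, then a CBAM-style channel gate (sigmoid of branch means)."""
    branches = np.stack([conv2d_same(img, k) for k in kernel_sizes])
    gate = 1.0 / (1.0 + np.exp(-branches.mean(axis=(1, 2))))  # channel attention
    return branches * gate[:, None, None]
```

    The larger kernels see more context while the smaller ones preserve detail, which is the multi-scale trade-off the abstract describes.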

    WHUVID: A Large-Scale Stereo-IMU Dataset for Visual-Inertial Odometry and Autonomous Driving in Chinese Urban Scenarios

    In this paper, we present a challenging stereo-inertial dataset collected onboard a sports utility vehicle (SUV) for the tasks of visual-inertial odometry (VIO), simultaneous localization and mapping (SLAM), autonomous driving, object detection, and other computer vision techniques. We recorded a large set of time-synchronized stereo image sequences (2 × 1280 × 720 @ 30 fps RGB) and corresponding inertial measurement unit (IMU) readings (400 Hz) from a Stereolabs ZED2 camera, along with centimeter-level-accurate six-degree-of-freedom ground truth (100 Hz) from a u-blox GNSS-IMU navigation device with real-time kinematic correction signals. The dataset comprises 34 sequences recorded during November 2020 in Wuhan, the largest city in Central China. Further, the dataset contains abundant unique urban scenes and features of a complex modern metropolis, which have rarely appeared in previously released benchmarks. Results from milestone VIO/SLAM algorithms reveal that methods exhibiting excellent performance on established datasets such as KITTI and EuRoC perform unsatisfactorily when moved outside the laboratory into the real world. We expect our dataset to reduce this gap by providing more challenging and diverse scenarios to the research community. The full dataset with raw and calibrated data is publicly available, along with a lightweight MATLAB/Python toolbox for preprocessing and evaluation; it can be downloaded in its entirety from the uniform resource locator (URL) provided in the main text.
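    A typical preprocessing step with such a dataset is associating the 400 Hz IMU stream with the 30 fps frame timestamps, e.g. selecting the IMU readings to pre-integrate between two consecutive frames. A minimal sketch, independent of the dataset's actual toolbox API:

```python
import bisect

def imu_between(imu_timestamps, t0, t1):
    """Indices of IMU samples with timestamps in [t0, t1): the readings to
    pre-integrate between two consecutive camera frames. Assumes the
    timestamp list is sorted, as in any time-synchronized recording."""
    lo = bisect.bisect_left(imu_timestamps, t0)
    hi = bisect.bisect_left(imu_timestamps, t1)
    return range(lo, hi)
```

    At 400 Hz IMU and 30 fps video, each frame interval contains roughly 13 to 14 IMU samples.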

    SAR Images Statistical Modeling and Classification Based on the Mixture of Alpha-Stable Distributions

    This paper proposes the mixture of Alpha-stable (MAS) distributions for modeling the statistical properties of Synthetic Aperture Radar (SAR) images in a supervised Markovian classification algorithm. Our work is motivated by the fact that natural scenes consist of reflectors of various types that are typically concentrated within a small area, so SAR images generally exhibit sharp peaks, heavy tails, and even multimodal statistics, especially at high resolution. Unimodal distributions do not fit such statistics well, and thus a multimodal approach is necessary. Driven by the multimodality and impulsiveness of high-resolution SAR image histograms, we utilize the mixture of Alpha-stable distributions to describe these characteristics. A pseudo-simulated annealing (PSA) estimator based on Markov chain Monte Carlo (MCMC) is presented to efficiently estimate the model parameters of the mixture of Alpha-stable distributions. To validate the proposed PSA estimator, we apply it to simulated data and compare its performance to that of a state-of-the-art estimator. Finally, we exploit the MAS distributions and a Markovian context for SAR image classification. The effectiveness of the proposed classifier is demonstrated by experiments on TerraSAR-X images, which verify the validity of the MAS distributions for modeling and classification of SAR images.
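    Sampling from a mixture of symmetric alpha-stable components, e.g. to generate simulated data of the kind mentioned above, can be done with the standard Chambers–Mallows–Stuck construction. This is a generic sketch of that construction, not the paper's PSA estimator:

```python
import math
import random

def sas_sample(alpha, rng=random):
    """Chambers-Mallows-Stuck draw of a standard symmetric alpha-stable
    variate (beta = 0), valid for 0 < alpha <= 2."""
    V = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    if abs(alpha - 1.0) < 1e-9:
        return math.tan(V)  # Cauchy case
    return (math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

def mas_sample(weights, alphas, locs, scales, rng=random):
    """One draw from a mixture of symmetric alpha-stable components:
    pick a component by its weight, then shift/scale a standard draw."""
    i = rng.choices(range(len(weights)), weights=weights)[0]
    return locs[i] + scales[i] * sas_sample(alphas[i], rng)
```

    Smaller alpha values produce heavier tails, which matches the impulsive, heavy-tailed histograms the abstract attributes to high-resolution SAR data.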

    Gaofen-3 PolSAR Image Classification via XGBoost and Polarimetric Spatial Information

    The launch of the Chinese Gaofen-3 (GF-3) satellite will provide abundant synthetic aperture radar (SAR) images with different imaging modes for land cover classification and other potential uses in the next few years. This paper aims to propose an efficient and practical classification framework for GF-3 polarimetric SAR (PolSAR) images. The proposed framework consists of four simple parts: polarimetric feature extraction and stacking, initial classification via XGBoost, superpixel generation by statistical region merging (SRM) based on the Pauli RGB image, and a post-processing step that determines the label of each superpixel by modified majority voting. Fast initial classification via XGBoost and the incorporation of spatial information through superpixel-based modified majority voting make the method efficient for practical use. Preliminary experimental results on real GF-3 PolSAR images and the AIRSAR Flevoland data set validate the efficacy and efficiency of the proposed classification framework. The results demonstrate that the quality of GF-3 PolSAR data is adequate for classification purposes, and that the incorporation of spatial information is important for overall performance improvement.
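    The post-processing step can be sketched as plain majority voting over superpixels: every pixel is relabeled with the most frequent initial label inside its superpixel. The paper's "modified" voting adds rules not reproduced in this sketch.

```python
from collections import Counter

def superpixel_majority_vote(pixel_labels, superpixel_ids):
    """Relabel each pixel with the majority class of its superpixel.
    Plain majority voting; the paper's 'modified' variant is not detailed
    in the abstract, so it is not reproduced here."""
    votes = {}
    for lab, sp in zip(pixel_labels, superpixel_ids):
        votes.setdefault(sp, Counter())[lab] += 1
    winner = {sp: c.most_common(1)[0][0] for sp, c in votes.items()}
    return [winner[sp] for sp in superpixel_ids]
```

    Because superpixels follow image boundaries, this smooths out isolated misclassifications from the initial pixel-wise classifier while respecting region edges.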

    AMN: Attention Metric Network for One-Shot Remote Sensing Image Scene Classification

    In recent years, deep neural network (DNN)-based scene classification methods have achieved promising performance. However, the data-driven training strategy requires a large number of labeled samples, making DNN-based methods unable to solve the scene classification problem when only a few labeled images are available. As the number and variety of scene images continue to grow, the cost and difficulty of manual annotation also increase. Therefore, it is important to address scene classification with only a few labeled samples. In this paper, we propose an attention metric network (AMN) within the few-shot learning (FSL) framework to improve the performance of one-shot scene classification. AMN is composed of a self-attention embedding network (SAEN) and a cross-attention metric network (CAMN). In SAEN, we adopt spatial attention and channel attention over feature maps to obtain rich features of scene images. In CAMN, we propose a novel cross-attention mechanism that highlights the features most relevant to each category, improving similarity measurement. A loss function combining mean square error (MSE) loss with a multi-class N-pair loss is developed, which promotes intra-class similarity and inter-class variance of the embedding features and further improves the similarity measurement. Experiments on the NWPU-RESISC45 and RSD-WHU46 datasets demonstrate that our method achieves state-of-the-art results on one-shot remote sensing image scene classification tasks.
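    A minimal sketch of an objective combining an MSE term with a multi-class N-pair (softmax cross-entropy) term over similarity scores; the target value and the weighting factor are assumptions, since the abstract does not give the exact formulation.

```python
import numpy as np

def combined_loss(sim_pos, sims_neg, target=1.0, lam=0.5):
    """MSE on the positive similarity plus a multi-class N-pair term
    (softmax cross-entropy with the positive as the true class).
    `target` and `lam` are assumed values, not taken from the paper."""
    mse = (sim_pos - target) ** 2
    logits = np.concatenate(([sim_pos], np.asarray(sims_neg, dtype=float)))
    npair = -sim_pos + np.log(np.sum(np.exp(logits)))
    return mse + lam * npair
```

    The MSE term pulls the positive-pair similarity toward the target, while the N-pair term pushes it above every negative similarity, which is how such an objective promotes intra-class similarity and inter-class variance.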

    Deformable ConvNet with Aspect Ratio Constrained NMS for Object Detection in Remote Sensing Imagery

    Convolutional neural networks (CNNs) have demonstrated their capability in object detection for very high resolution remote sensing images. However, CNNs have obvious limitations in modeling the geometric variations of remote sensing targets. In this paper, we introduced a CNN structure, namely deformable ConvNet, to address geometric modeling in object recognition. By adding offsets to the convolution layers, the feature maps of a CNN can be sampled at non-fixed locations, enhancing the network's understanding of visual appearance. In our work, a deformable region-based fully convolutional network (R-FCN) was constructed by substituting the regular convolution layers with deformable convolution layers. To use this deformable convolutional network (ConvNet) efficiently, a training mechanism was developed: we first set the pre-trained R-FCN natural image model as the default network parameters of the deformable R-FCN, and then fine-tuned the deformable ConvNet on very high resolution (VHR) remote sensing images. To remedy the increase in line-like false region proposals, we developed aspect-ratio-constrained non-maximum suppression (arcNMS), which improves the precision of the deformable ConvNet for detecting objects. An end-to-end approach was then formed by combining the deformable R-FCN, the fine-tuning strategy, and aspect-ratio-constrained NMS. The developed method outperformed a state-of-the-art benchmark in object detection without data augmentation.
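    The arcNMS idea can be sketched as a pre-filter that discards line-like proposals whose aspect ratio exceeds a threshold, followed by standard greedy NMS. The threshold value and (x1, y1, x2, y2) box format below are assumptions for illustration:

```python
def arc_nms(boxes, scores, iou_thr=0.5, max_ratio=5.0):
    """Aspect-ratio-constrained NMS sketch: drop line-like proposals whose
    aspect ratio exceeds `max_ratio` (an assumed threshold), then run
    standard greedy NMS. Boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    # Pre-filter: reject boxes that are too elongated in either direction.
    cand = [i for i, b in enumerate(boxes)
            if max((b[2] - b[0]) / (b[3] - b[1] + 1e-9),
                   (b[3] - b[1]) / (b[2] - b[0] + 1e-9)) <= max_ratio]
    cand.sort(key=lambda i: scores[i], reverse=True)
    keep = []
    for i in cand:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
```

    Rejecting elongated boxes before suppression prevents a high-scoring line-like false proposal from suppressing valid neighboring detections.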

    Dynamic Traffic Detection and Modeling for Beidou Satellite Networks

    The Beidou navigation system (BDS) has been developed as an integrated system. The third-generation system, BDS-3, will be capable of providing not only global positioning and navigation but also data communication. As the volume of data transmitted through BDS-3 continues to increase, BDS-3 will encounter network traffic congestion, unbalanced resource usage, and security attacks, just as terrestrial networks do. Network traffic monitoring is therefore essential for the automatic management and safety assurance of BDS-3. A dynamic traffic detection method, including traffic prediction by Long Short-Term Memory (LSTM) and a dynamically adjusting polling strategy, is proposed to unevenly sample the traffic of each link. A distributed traffic detection architecture is designed to collect the detected traffic and its related temporal and spatial information with low delay. A time-varying graph (TVG) model is introduced to represent the dynamic topology, the time-varying links, and their traffic. The BDS-3 network is simulated with STK, and the WIDE dataset is used to simulate the traffic between the satellites and ground stations. Simulation results show that the dynamic traffic detection method can follow the variation of each link's traffic through uneven sampling. The detected traffic can be transmitted to the ground station in near real time through the distributed traffic detection architecture. The traffic and its related information are stored in Neo4j in terms of the TVG model, so the nodes, edges, and traffic of BDS-3 can be quickly queried. The presented dynamic traffic detection and representation schemes will support BDS-3 in establishing automatic management and security systems and in developing its business.
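    The dynamically adjusting polling strategy can be sketched as an interval that shrinks for links predicted (e.g. by the LSTM) to carry more traffic, so busy links are sampled more often. All constants below are assumed; the paper's exact schedule is not given in the abstract.

```python
def polling_interval(predicted_mbps, base=60.0, t_min=5.0, t_max=120.0):
    """Seconds to wait before polling a link again: heavier predicted
    traffic shortens the interval. `base`, `t_min`, and `t_max` are
    assumed constants for illustration."""
    interval = base / (1.0 + predicted_mbps)
    return max(t_min, min(t_max, interval))
```

    This is one simple way to realize uneven sampling: measurement effort concentrates on the links whose traffic is changing most, while idle links consume little polling bandwidth.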

    Individual Building Extraction from TerraSAR-X Images Based on Ontological Semantic Analysis

    Accurate building information plays a crucial role in urban planning, human settlement and environmental management. Synthetic aperture radar (SAR) images, which deliver metric-resolution imagery, allow detailed information on urban areas to be analyzed and extracted. In this paper, we consider the problem of extracting individual buildings from SAR images based on a domain ontology. By analyzing a building scattering model with different orientations and structures, a building ontology model is set up to express the multiple characteristics of individual buildings. Under this semantic expression framework, an object-based SAR image segmentation method is adopted to provide homogeneous image objects, and three categories of image object features are extracted. Semantic rules are implemented by organizing the image object features, forming an ontological semantic description of individual building objects. Finally, the building primitives are used to detect buildings among the available image objects. Experiments on TerraSAR-X images of Foshan city, China, with a spatial resolution of 1.25 m × 1.25 m, show that the total extraction rates are above 84%. The results indicate that the ontological semantic method can accurately extract flat-roof and gable-roof buildings larger than 250 pixels with different orientations.
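    The semantic-rule step can be sketched as a check that an image object satisfies every ontology rule, each expressed as a feature with lower and upper bounds. The feature names and bounds below are invented for illustration; the paper's actual rules and feature categories are not detailed in the abstract.

```python
def is_building(obj_features, rules):
    """Toy semantic-rule check: an image object is labelled a building only
    if every rule (feature_name, lower, upper) is satisfied. A missing
    feature fails the rule (NaN comparisons are False)."""
    return all(lo <= obj_features.get(name, float("nan")) <= hi
               for name, lo, hi in rules)
```

    In the paper's pipeline, such rules are organized from the three extracted feature categories; here each rule is reduced to a simple interval test on one feature.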